In [ ]:
%matplotlib nbagg
import matplotlib.pyplot as plt
import numpy as np

Preprocessing and Pipelines


In [ ]:
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)

When cross-validating with scaling, we need to estimate the mean and standard deviation separately on each training fold; otherwise, information from the validation data leaks into the preprocessing. To do that, we build a pipeline.


In [ ]:
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

In [ ]:
pipeline = Pipeline([("scaler", StandardScaler()), ("svm", SVC())])
# or for short:
make_pipeline(StandardScaler(), SVC())

In [ ]:
pipeline.fit(X_train, y_train)

In [ ]:
pipeline.predict(X_test)
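After fitting, the individual fitted steps can be inspected through the pipeline's `named_steps` attribute, using the names given in the `Pipeline` constructor. A minimal, self-contained sketch:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

pipeline = Pipeline([("scaler", StandardScaler()), ("svm", SVC())])
pipeline.fit(X_train, y_train)

# The fitted StandardScaler is available under its step name:
scaler = pipeline.named_steps["scaler"]
print(scaler.mean_.shape)  # per-feature means estimated on the training data only
```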

Cross-validation with a pipeline


In [ ]:
from sklearn.model_selection import cross_val_score
cross_val_score(pipeline, X_train, y_train)

Grid Search with a pipeline


In [ ]:
from sklearn.model_selection import GridSearchCV

# Parameters of pipeline steps are addressed as <step name>__<parameter name>
param_grid = {'svm__C': 10. ** np.arange(-3, 3),
              'svm__gamma': 10. ** np.arange(-3, 3)}

grid_pipeline = GridSearchCV(pipeline, param_grid=param_grid, n_jobs=-1)

In [ ]:
grid_pipeline.fit(X_train, y_train)

In [ ]:
grid_pipeline.score(X_test, y_test)
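After the search, the best parameter combination and its cross-validated score can be read off the fitted grid-search object via `best_params_` and `best_score_`. A sketch with a smaller (illustrative) grid to keep the run fast:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

pipeline = Pipeline([("scaler", StandardScaler()), ("svm", SVC())])

# Illustrative, smaller ranges than in the cell above
param_grid = {'svm__C': 10. ** np.arange(-1, 2),
              'svm__gamma': 10. ** np.arange(-3, 0)}

grid_pipeline = GridSearchCV(pipeline, param_grid=param_grid)
grid_pipeline.fit(X_train, y_train)

print(grid_pipeline.best_params_)  # best C / gamma combination found
print(grid_pipeline.best_score_)   # mean cross-validated score of that combination
```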

Exercises

Add random features to the iris dataset using np.random.uniform and np.hstack.

Build a pipeline using the SelectKBest univariate feature selection from the sklearn.feature_selection module and the LinearSVC on the iris dataset.

Use GridSearchCV to adjust C and the number of features selected in SelectKBest.
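One possible approach to the exercise (not the loaded solution file; the parameter ranges and the number of random features are illustrative choices):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

iris = load_iris()
rng = np.random.RandomState(0)
# Append 10 uninformative random features to the 4 iris features
X = np.hstack([iris.data, rng.uniform(size=(len(iris.data), 10))])
X_train, X_test, y_train, y_test = train_test_split(X, iris.target, random_state=0)

pipe = make_pipeline(SelectKBest(), LinearSVC())
# make_pipeline names the steps after the lowercased class names
param_grid = {'selectkbest__k': [2, 4, 8, 14],
              'linearsvc__C': [0.1, 1, 10]}
grid = GridSearchCV(pipe, param_grid=param_grid)
grid.fit(X_train, y_train)

print(grid.best_params_)            # number of features and C chosen by the search
print(grid.score(X_test, y_test))   # accuracy on the held-out test set
```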


In [ ]:
# %load solutions/pipeline_iris.py